SwePub

Result list for the search "hsv:(TEKNIK OCH TEKNOLOGIER) hsv:(Elektroteknik och elektronik) hsv:(Reglerteknik) ;pers:(Johansson Mikael);mspu:(licentiatethesis)"


  • Result 1-10 of 10
1.
  • Andersson, Malin (author)
  • Aging sensitive battery control
  • 2022
  • Licentiate thesis (other academic/artistic), abstract:
    • The battery is a component with significant impact on both the cost and environmental footprint of a full electric vehicle (EV). Consequently, there is a strong motivation to maximize its degree of utilization. Usage limits are enforced by the battery management system (BMS) to ensure safe operation and limit battery degradation. The limits tend to be conservative to account for uncertainty in battery state estimation as well as changes in the battery's characteristics due to aging. To improve the utilization degree, aging-sensitive battery control is necessary. This refers to control that a) adjusts during the battery's life based on its state and b) balances the trade-off between utilization and degradation according to requirements from the specific application. In state-of-the-art battery installations, only three signals are measured: current, voltage and temperature. However, the battery's behaviour is governed by other states that must be estimated, such as its state-of-charge (SOC) or local concentrations and potentials. The BMS therefore relies on models to estimate states and to perform control actions. In order to realize points a) and b), the models that are used for state estimation and control must be updated onboard. An updated model can also serve the purpose of diagnosing the battery, since it reflects the changing properties of an aging battery. This thesis investigates identification of physics-based and empirical battery models from operational EV data. The work is divided into three main studies.
      1) A global sensitivity analysis was performed on the parameters of a high-order physics-based model. Measured current profiles from real EVs were used as input, and the parameters' impact on both modelled cell voltage and other internal states was assessed. The study revealed that in order to excite all model parameters, an input with high current rates, a large SOC span and longer charge or discharge periods was required. This was only present in the data set from an electric truck with few battery packs. Data sets from vehicles with more packs (electric bus) and a limited SOC operating window (plug-in hybrid truck) excited fewer model parameters.
      2) Empirical linear-parameter-varying (LPV) dynamic models were identified on driving data. Model parameters were formulated as functions of the measured temperature, current magnitude and estimated open-circuit voltage (OCV). To handle the time-scale differences in battery voltage response, continuous-time system identification was employed. We concluded that the proposed models had superior predictive abilities compared to discrete and time-invariant counterparts.
      3) Instead of using driving data to parametrize models, we also investigated the possibility of designing the charging current so as to increase its information content about model parameters. This was formulated as an optimal control problem with charging speed and information content as objectives. To also take battery degradation into account, constraints on polarization were included. The results showed that parameter information can be increased without a significant increase in charge time or aging-related stress.
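The equivalent-circuit view behind the abstract above can be sketched as follows: a minimal first-order RC battery model whose ohmic resistance varies with temperature, in the spirit of an LPV parametrization. All functions and parameter values (`ocv`, `r0`, `r1`, `c1`, the capacity) are illustrative assumptions, not the thesis's identified model.

```python
# A minimal sketch (not the thesis's model): first-order RC equivalent-circuit
# battery model with a temperature-dependent ohmic resistance.

def ocv(soc):
    """Toy open-circuit-voltage curve (V) as a function of state of charge."""
    return 3.0 + 1.2 * soc

def r0(temp_c):
    """Ohmic resistance (ohm) that grows as the cell gets colder (assumed law)."""
    return 0.010 * (1.0 + 0.02 * (25.0 - temp_c))

def simulate(current, dt=1.0, soc0=0.5, temp_c=25.0,
             capacity_as=3600.0, r1=0.005, c1=2000.0):
    """Euler simulation of terminal voltage for a current profile (A, discharge > 0)."""
    soc, v1, volts = soc0, 0.0, []
    for i in current:
        v = ocv(soc) - r0(temp_c) * i - v1       # terminal voltage
        volts.append(v)
        soc -= i * dt / capacity_as              # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))     # RC polarization state
    return volts

rest = simulate([0.0] * 10)    # at rest: terminal voltage equals the OCV
load = simulate([10.0] * 10)   # under load: voltage sags below the OCV
```

Identifying `r0`, `r1`, `c1` as functions of temperature, current magnitude and OCV from driving data is the kind of LPV problem the abstract describes.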
2.
  • Arnström, Daniel, 1994- (author)
  • On Complexity Certification of Active-Set QP Methods with Applications to Linear MPC
  • 2021
  • Licentiate thesis (other academic/artistic), abstract:
    • In model predictive control (MPC), an optimization problem has to be solved at each time step, which in real-time applications makes it important to solve these problems efficiently and to have good upper bounds on the worst-case solution time. For linear MPC problems, the optimization problem in question is often a quadratic program (QP) that depends on parameters such as system states and reference signals. A popular class of methods for solving such QPs is active-set methods, in which a sequence of linear systems of equations is solved. The primary contribution of this thesis is a method that determines which sequence of subproblems a popular class of such active-set algorithms needs to solve, for every possible QP instance that might arise from a given linear MPC problem (i.e., for every possible state and reference signal). By knowing these sequences, worst-case bounds on the number of iterations and floating-point operations, and ultimately the maximum solution time, that these active-set algorithms require to compute a solution can be determined, which is of importance when, e.g., linear MPC is used in safety-critical applications. After establishing this complexity certification method, its applicability is extended by showing how it can be used indirectly to certify the complexity of another, efficient, type of active-set QP algorithm that reformulates the QP as a nonnegative least-squares problem. Finally, the proposed complexity certification method is extended further to situations in which enhancements to the active-set algorithms are used, namely when they are terminated early (to save computations) and when outer proximal-point iterations are performed (to improve numerical stability).
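The mechanics that make such certification possible can be illustrated on a deliberately tiny case: a primal active-set method for a separable QP, minimize 0.5·Σ h[i]·x[i]² + Σ f[i]·x[i] subject to x ≥ 0, where the diagonal Hessian makes each subproblem solve trivial. The point is that the iteration count is determined entirely by the sequence of working sets the method visits, which is what the thesis's method enumerates for every parameter value. This toy solver and its data are illustrative, not the certified algorithm class itself.

```python
# A minimal sketch of a primal active-set method on a separable QP with
# nonnegativity constraints. Diagonal Hessian => free variables jump straight
# to their unconstrained minimum, so no blocking-constraint step is needed.

def active_set_qp(h, f, max_iter=50):
    n = len(h)
    active = set(range(n))        # start with every bound in the working set
    iterations = 0
    while iterations < max_iter:
        iterations += 1
        # Equality-constrained subproblem: free variables at their
        # unconstrained minimum, active variables fixed at the bound.
        x = [0.0 if i in active else -f[i] / h[i] for i in range(n)]
        # Multiplier for x[i] = 0 equals the gradient h[i]*x[i] + f[i] = f[i].
        negative = [i for i in active if f[i] < 0.0]
        if not negative:
            return x, iterations  # all multipliers nonnegative: KKT satisfied
        # Release the constraint with the most negative multiplier.
        active.remove(min(negative, key=lambda i: f[i]))
    raise RuntimeError("iteration budget exceeded")

x, iters = active_set_qp([2.0, 1.0, 4.0], [-4.0, 3.0, -2.0])
```

Here the working-set sequence is {0,1,2} → {1,2} → {1}, so exactly three iterations are needed; a certification method bounds this count over all parametric instances.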
3.
  • Aytekin, Arda, 1986- (author)
  • Asynchronous Algorithms for Large-Scale Optimization : Analysis and Implementation
  • 2017
  • Licentiate thesis (other academic/artistic), abstract:
    • This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution.
      The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step size can be selected independently of the delay bound.
      Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. This time, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations, and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay.
      Extensive simulations and actual implementations of the algorithms on different platforms and representative real-world problems validate our theoretical results.
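The delay-insensitivity claim above can be illustrated with a toy simulation: gradient descent on a smooth quadratic in which every update may use a gradient evaluated at a stale iterate, with the delay bounded but otherwise arbitrary. The problem, step size and delay model are illustrative assumptions, not the thesis's setting.

```python
# A minimal sketch of convergence under bounded information delay: each step
# applies the gradient of a randomly chosen stale iterate (delay <= max_delay).

import random

def delayed_gd(grad, x0, step, iters, max_delay, seed=0):
    rng = random.Random(seed)
    history = [x0]
    x = x0
    for _ in range(iters):
        delay = rng.randint(0, min(max_delay, len(history) - 1))
        stale = history[-1 - delay]      # iterate the gradient was computed at
        x = x - step * grad(stale)
        history.append(x)
    return x

# Minimize f(x) = 0.5 * (x - 3)**2; a small constant step tolerates any
# bounded delay and still reaches the optimum x* = 3.
x_star = delayed_gd(lambda x: x - 3.0, x0=10.0, step=0.02,
                    iters=3000, max_delay=20)
```

The step size 0.02 is chosen conservatively so that step × (delay + 1) stays well below 1; the delay-insensitive results in the thesis concern step-size rules that need not shrink with the delay bound.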
4.
  • Biel, Martin (author)
  • Distributed Stochastic Programming with Applications to Large-Scale Hydropower Operations
  • 2019
  • Licentiate thesis (other academic/artistic), abstract:
    • Stochastic programming is a subfield of mathematical programming concerned with optimization problems subject to uncertainty. Many engineering problems with random elements can be accurately modeled as a stochastic program. In particular, decision problems associated with hydropower operations motivate the application of stochastic programming. When complex decision-support problems are considered, the corresponding stochastic programming models often grow too large to store and solve on a single computer. This motivates parallel approaches that enable efficient treatment of large-scale stochastic programs in a distributed environment. In this thesis, we develop mathematical and computational tools that facilitate distributed stochastic programs which can be efficiently stored and solved.
      First, we present a software framework for stochastic programming implemented in the Julia language. A key feature of the framework is its support for distributing stochastic programs in memory. Moreover, the framework includes a large set of structure-exploiting algorithms for solving stochastic programming problems. These algorithms are based on the classical L-shaped and progressive-hedging algorithms and can run in parallel on distributed stochastic programs. The distributed performance of our software tools is improved by exploring algorithmic innovations and software patterns. We present the architecture of the framework and highlight key implementation details. Finally, we provide illustrative examples of stochastic programming functionality and benchmarks on large-scale problems.
      Then, we pursue further algorithmic improvements to the distributed L-shaped algorithm. Specifically, we consider the use of dynamic cut aggregation. We develop theoretical results on convergence and complexity and then showcase performance improvements in numerical experiments. We suggest several aggregation schemes based on parameterized selection rules. Before we perform large-scale experiments, the aggregation parameters are determined by a tuning procedure. In brief, cut aggregation can yield major performance improvements to L-shaped algorithms in distributed settings.
      Finally, we consider an application to hydropower operations. The day-ahead planning problem involves specifying optimal order volumes in a deregulated electricity market, without knowledge of the next-day market price, and then optimizing the hydropower production. We provide a detailed introduction to the day-ahead model and explain how we can implement it with our computational tools. This covers a complete procedure of gathering data, generating forecasts from the data, and finally formulating and solving a stochastic programming model of the day-ahead problem. Using a sample-based algorithm that internally relies on our structure-exploiting solvers, we obtain tight confidence intervals around the optimal solution of the day-ahead problem.
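The cut-aggregation idea mentioned above can be illustrated numerically. In an L-shaped method, each scenario s contributes optimality cuts theta ≥ a + b·x on the recourse cost, and aggregating the cuts from one round across scenarios (by probability-weighted averaging) yields a single valid, if possibly weaker, cut on the expected cost. The scenario data below is invented for illustration; the thesis's aggregation schemes and selection rules are richer than this plain averaging.

```python
# A minimal sketch: probability-weighted aggregation of per-scenario
# optimality cuts is valid but no tighter than the multi-cut model
# (max-of-averages <= average-of-maxes).

# Two rounds of per-scenario cuts (intercept, slope); equal probabilities.
scenario_cuts = [
    [(4.0, -1.0), (0.0, 0.5)],   # cuts collected for scenario 1
    [(1.0, 0.5), (2.0, -0.5)],   # cuts collected for scenario 2
]
p = [0.5, 0.5]

def multi_cut(x):
    """Per-scenario cut envelope, then expectation (the tighter model)."""
    return sum(pi * max(a + b * x for a, b in cs)
               for pi, cs in zip(p, scenario_cuts))

def aggregated(x):
    """Aggregate each round's cuts across scenarios, then take the envelope."""
    rounds = zip(*scenario_cuts)
    agg = [(sum(pi * a for pi, (a, _) in zip(p, rnd)),
            sum(pi * b for pi, (_, b) in zip(p, rnd))) for rnd in rounds]
    return max(a + b * x for a, b in agg)

gap_free = all(aggregated(x) <= multi_cut(x) + 1e-12 for x in range(-5, 6))
```

The aggregated model carries far fewer cuts (one per round instead of one per scenario per round), which is exactly the communication and master-problem saving that makes aggregation attractive in distributed settings.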
5.
  • Demirel, Burak, 1984- (author)
  • Design and Performance Analysis of Wireless Networked Control Systems
  • 2013
  • Licentiate thesis (other academic/artistic), abstract:
    • Networked control systems (NCSs) are distributed systems that use shared communication networks to exchange information between system components such as sensors, controllers and actuators. The networked control system architecture promises advantages in terms of increased flexibility, reduced wiring and lower maintenance costs, and is finding its way into a wide variety of applications, ranging from automobiles and automated highway systems to process control and power distribution systems. However, NCSs also pose many challenges in their analysis and design, since transmitting signals over wireless networks has several side effects, such as: (i) variable sampling intervals, (ii) variable communication delays, and (iii) packet losses caused by the unreliability of the network. In this thesis, we develop three design frameworks that take some of these side effects into account to improve the performance of the overall system.
      First, this thesis presents the joint design of packet forwarding policies and controllers for wireless control loops in which sensor measurements are sent to the controller over an unreliable and energy-constrained multi-hop wireless network. For a fixed sampling rate of the sensor, the co-design problem separates into two well-defined and independent subproblems: transmission scheduling for maximizing the deadline-constrained reliability, and optimal control under packet loss. We develop optimal and implementable solutions for these subproblems and show that the optimally co-designed system can be found efficiently. Numerical examples highlight the many trade-offs involved and demonstrate the power of our approach.
      Secondly, this thesis proposes a supervisory control structure for networked systems with time-varying delays. The control structure, in which a supervisor triggers the most appropriate controller from a multi-controller unit, aims at improving the closed-loop performance relative to what can be obtained using a single robust controller. Our analysis considers average dwell-time switching and is based on a novel multiple Lyapunov-Krasovskii functional. We develop stability conditions that can be verified by semi-definite programming, and show that the associated state feedback synthesis problem can also be solved using convex optimization tools. Extensions of the analysis and synthesis procedures to the case when the evolution of the delay mode is described by a Markov chain are also developed. Simulations on small- and large-scale networked control systems illustrate the effectiveness of our approach.
      Lastly, we consider an event-triggered control framework for a linear time-invariant process. We introduce a range-based event-triggering algorithm that is used to transmit information from the controller to the actuator. We analytically characterize the control performance and communication rate for a given event threshold, and provide a systematic way to analyze the trade-off between communication rate and control performance by appropriately selecting the event threshold. Using numerical examples, we demonstrate the effectiveness of the proposed framework.
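The communication-versus-performance trade-off in the last part of the abstract can be sketched with a scalar example: the controller-to-actuator link transmits a new command only when it differs from the last transmitted one by more than a threshold. The system, gains and threshold below are illustrative assumptions, not the thesis's range-based algorithm.

```python
# A minimal sketch of threshold-based event triggering for the scalar system
# x+ = a*x + b*u with state feedback u = -k*x: transmit only on events.

def run(threshold, a=0.9, b=1.0, k=0.5, x0=5.0, steps=60):
    x, u_applied, sent = x0, 0.0, 0
    xs = []
    for _ in range(steps):
        u_new = -k * x
        if abs(u_new - u_applied) > threshold:  # event: transmit new command
            u_applied = u_new
            sent += 1
        x = a * x + b * u_applied               # actuator holds last command
        xs.append(x)
    return xs, sent

dense_xs, dense_sent = run(threshold=0.0)    # transmit every step
sparse_xs, sparse_sent = run(threshold=0.5)  # transmit on events only
```

Raising the threshold cuts the number of transmissions at the price of a residual error bounded by the threshold: the sparse run stays in a neighborhood of the origin instead of converging to it.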
6.
  • Flärdh, Oscar, 1980- (author)
  • Modelling, analysis and experimentation of a simple feedback scheme for error correction control
  • 2007
  • Licentiate thesis (other academic/artistic), abstract:
    • Data networks are an important part of an increasing number of applications with real-time and reliability requirements. To meet these demands, a variety of approaches have been proposed. Forward error correction, which adds redundancy to the communicated data, is one of them. However, the redundancy occupies communication bandwidth, so it is desirable to control the amount of redundancy in order to achieve high reliability without adding excessive communication delay. The main contribution of the thesis is to formulate the problem of adjusting the redundancy in a control framework, which enables the dynamic properties of error correction control to be analyzed using control theory. The trade-off between application quality and resource usage is captured by introducing an optimal control problem. Its dependence on knowledge of the network state at the transmission side is discussed. An error correction controller that optimizes the amount of redundancy without relying on network state information is presented. This is achieved by utilizing an extremum seeking control algorithm to optimize the cost function. Models of varying complexity of the resulting feedback system are presented and analyzed. Conditions for convergence are given. Multiple-input describing function analysis is used to examine periodic solutions. The results are illustrated through computer simulations and experiments on a wireless sensor network.
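The extremum seeking mechanism named above can be sketched in a few lines: a sinusoidal probe is added to the input, the measured cost is demodulated with the same sinusoid to estimate the gradient, and the estimate is integrated to drive the input toward the optimum with no model of the cost. The quadratic cost and all gains below are illustrative stand-ins for the redundancy-versus-delay objective in the abstract.

```python
# A minimal sketch of perturbation-based extremum seeking on a static map.

import math

def extremum_seek(cost, u0, a=0.2, omega=20.0, gain=1.0, dt=0.005, steps=20000):
    u_hat = u0
    for k in range(steps):
        t = k * dt
        probe = a * math.sin(omega * t)
        y = cost(u_hat + probe)                       # measured objective
        u_hat -= gain * dt * y * math.sin(omega * t)  # demodulate and integrate
    return u_hat

# Cost unknown to the controller, with its minimum at u* = 2.0.
u_final = extremum_seek(lambda u: (u - 2.0) ** 2 + 1.0, u0=0.0)
```

Averaging shows why this works: demodulating y = (e + a·sin)² + 1 with sin leaves a term proportional to a·e, so the integrator implements approximate gradient descent; the residual ripple scales like 1/omega.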
7.
  • Khirirat, Sarit (author)
  • First-Order Algorithms for Communication Efficient Distributed Learning
  • 2019
  • Licentiate thesis (other academic/artistic), abstract:
    • Technological developments in devices and storage have made large volumes of data more accessible than ever. This transformation leads to optimization problems that are massive in both data volume and dimension. In response to this trend, the popularity of optimization on high-performance computing architectures has increased unprecedentedly. These scalable optimization solvers can achieve high efficiency by splitting computational loads among multiple machines. However, these methods also incur large communication overhead. To solve optimization problems with millions of parameters, communication between machines has been reported to consume up to 80% of the training time. To alleviate this communication bottleneck, many optimization algorithms with data compression techniques have been studied. In practice, they have been reported to save communication costs significantly while exhibiting almost comparable convergence to the full-precision algorithms. To explain these observations, we develop theory and techniques in this thesis for designing communication-efficient optimization algorithms.
      In the first part, we analyze the convergence of optimization algorithms with direct compression. First, we outline definitions of compression techniques that cover many compressors of practical interest. Then, we provide a unified analysis framework for optimization algorithms with compressors, which can be either deterministic or randomized. In particular, we show how the tuning parameters of compressed optimization algorithms must be chosen to guarantee performance. Our results show explicit dependency on compression accuracy and on the delay effect due to the asynchrony of algorithms. This allows us to characterize the trade-off between iteration and communication complexity under gradient compression.
      In the second part, we study how error compensation schemes can improve the performance of compressed optimization algorithms. Even though convergence guarantees of optimization algorithms with error compensation have been established, there is very limited theoretical support for the observed improvements in solution accuracy. We therefore develop theoretical explanations, which show that error compensation guarantees arbitrarily high solution accuracy from compressed information. In particular, error compensation helps remove accumulated compression errors, thus improving solution accuracy, especially for ill-conditioned problems. We also provide a strong convergence analysis of error compensation for parallel stochastic gradient descent across multiple machines. In particular, the error-compensated algorithms, unlike direct compression, achieve a significant reduction in compression error.
      Applications of the algorithms in this thesis to real-world problems with benchmark data sets validate our theoretical results.
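The error-compensation idea from the abstract can be sketched with a biased top-1 gradient compressor on an ill-conditioned quadratic: the part of each gradient discarded by compression is stored and added back before the next compression, so no gradient information is permanently lost. The problem data, step size and iteration count are illustrative assumptions, not the thesis's setting.

```python
# A minimal sketch of error compensation (error feedback) with gradient
# compression: transmit only one coordinate per step, remember the rest.

def top1(v):
    """Keep only the largest-magnitude coordinate (a biased compressor)."""
    i = max(range(len(v)), key=lambda j: abs(v[j]))
    return [v[j] if j == i else 0.0 for j in range(len(v))]

def grad(x):
    """Gradient of 0.5*(10*(x0-1)^2 + 0.1*(x1-1)^2), optimum at (1, 1)."""
    return [10.0 * (x[0] - 1.0), 0.1 * (x[1] - 1.0)]

def compressed_gd(step=0.05, iters=4000):
    x = [0.0, 0.0]
    memory = [0.0, 0.0]                   # accumulated compression error
    for _ in range(iters):
        corrected = [g + m for g, m in zip(grad(x), memory)]
        c = top1(corrected)               # transmit only one coordinate
        memory = [a - b for a, b in zip(corrected, c)]
        x = [xi - step * ci for xi, ci in zip(x, c)]
    return x, memory

x_ef, memory = compressed_gd()
```

The memory term telescopes: every discarded gradient component is eventually applied, which is why the iterates reach the optimum even though the weak coordinate is dropped by the compressor most of the time.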
8.
  • Petrosyan, Vahan, 1990- (author)
  • Fast, Robust and Scalable Clustering Algorithms with Applications in Computer Vision
  • 2018
  • Licentiate thesis (other academic/artistic), abstract:
    • In this thesis, we address a number of challenges in cluster analysis. We begin by investigating one of the oldest and most challenging problems: determining the number of clusters, k. For this problem, we propose a novel solution that, unlike previous techniques, delivers both the number of clusters and the clusters themselves in one shot (in contrast, conventional techniques run a given clustering algorithm several times for different values of k, and/or for several initializations with the same k).
      The second challenge we treat is the drawback, briefly mentioned above, of many conventional iterative clustering algorithms: how should they be initialized? We propose an initialization scheme that is applicable to multiple iterative clustering techniques widely used in practice (e.g., spectral clustering, EM-based methods, k-means). Numerical simulations demonstrate a significant improvement over many state-of-the-art initialization techniques.
      Third, we consider the computation of pairwise similarities between datapoints. A matrix of such similarities (the similarity matrix) constitutes the backbone of many clustering as well as unsupervised learning algorithms. In particular, for a given similarity metric, we propose a similarity transformation that promotes high similarity between points within a cluster and decreases the similarity in regions where clusters overlap. The transformation is particularly well-suited for many clustering and dimensionality reduction techniques, as we demonstrate in extensive numerical experiments.
      Finally, we investigate the application of clustering algorithms to image and video datasets, also known as superpixel segmentation. We propose a segmentation algorithm that significantly outperforms current state-of-the-art algorithms, both in terms of runtime and standard accuracy metrics. Based on this algorithm, we develop a tool for fast and accurate image annotation. Our findings show that our annotation technique accelerates the annotation process by up to 20 times without compromising quality. This indicates a substantial opportunity to speed up computer vision tasks, since image annotation forms a crucial step in creating training data.
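One generic way to obtain both k and the clusters in a single pass, shown here purely as an illustration of the one-shot idea and explicitly not the thesis's algorithm, is to threshold a pairwise similarity matrix and count connected components of the resulting graph. The Gaussian similarity, threshold and data points are all invented for this example.

```python
# A generic one-shot illustration (not the thesis's method): threshold a
# similarity graph and label its connected components by flood fill.

import math

def similarity(p, q):
    """Gaussian similarity from squared Euclidean distance."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(p, q)))

def cluster(points, threshold=0.5):
    n = len(points)
    labels = [-1] * n
    k = 0
    for s in range(n):                  # flood-fill connected components
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], k
        while stack:
            i = stack.pop()
            for j in range(n):
                if labels[j] == -1 and similarity(points[i], points[j]) > threshold:
                    labels[j] = k
                    stack.append(j)
        k += 1
    return k, labels

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.2),   # cluster near the origin
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.2)]   # cluster near (5, 5)
num_clusters, labels = cluster(pts)
```

The similarity transformation proposed in the thesis addresses exactly the failure mode of such naive schemes: points in cluster-overlap regions, where a single threshold cannot separate the groups.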
9.
  • Van Mai, Vien, 1990- (author)
  • Large-Scale Optimization With Machine Learning Applications
  • 2019
  • Licentiate thesis (other academic/artistic), abstract:
    • This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue problems and elastic-net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets.
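Anderson acceleration, mentioned above, can be sketched in its simplest window-size-1 form on a scalar fixed-point iteration x = g(x): the extrapolation weight is computed from the last two residuals and needs no problem-specific parameters (in the scalar case it reduces to the secant method applied to the residual). The map g below is an illustrative contraction, not one of the thesis's applications.

```python
# A minimal sketch of Anderson acceleration with window size m = 1.

import math

def anderson_aa1(g, x0, iters=20):
    x_prev = x0
    f_prev = g(x_prev) - x_prev                    # residual at x_prev
    x = g(x_prev)                                  # one plain step to start
    for _ in range(iters):
        f = g(x) - x
        denom = f - f_prev
        gamma = f / denom if denom != 0.0 else 0.0  # AA(1) mixing weight
        x_next = g(x) - gamma * (g(x) - g(x_prev))
        x_prev, f_prev, x = x, f, x_next
    return x

x_acc = anderson_aa1(math.cos, 1.0)   # fixed point of cos, x ~ 0.739
```

Plain iteration of cos contracts by only about 0.67 per step, so 20 steps leave a visible error; the accelerated iterate reaches the fixed point to high precision in the same budget.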
10.
  • Åstrand, Max (author)
  • Short-term Underground Mine Scheduling : Constraint Programming in an Industrial Application
  • 2018
  • Licentiate thesis (other academic/artistic), abstract:
    • The operational performance of an underground mine depends critically on how the production is scheduled. Increasingly advanced methods are used to create optimized long-term plans, while the actual excavation is simultaneously becoming more and more automated. Therefore, the mapping of long-term goals into tasks by manual short-term scheduling is becoming a limiting segment in the optimization chain. In this thesis, we study automating the short-term mine scheduling process, and thus contribute an important missing piece in the pursuit of autonomous mining.
      First, we clarify the fleet scheduling problem and its surrounding context. Based on this knowledge, we propose a flow shop that models the mine scheduling problem. A flow shop is a general abstract process formulation that captures the key properties of a scheduling problem without going into specific details. We argue that several popular mining methods can be modeled as a rich variant of a k-stage hybrid flow shop, where the flow shop includes a mix of interruptible and uninterruptible tasks, after-lags, machine unavailabilities, and sharing of machines between stages.
      Then, we propose a Constraint Programming approach to schedule the underground production fleet. We formalize the problem and present a model that can be used to solve it. The model is implemented and evaluated on instances representative of medium-sized underground mines.
      After that, we introduce travel times of the mobile machines into the scheduling problem. This acknowledges that underground road networks can span several hundreds of kilometers. With this addition, the initially proposed Constraint Programming model struggles to scale to larger instances. Therefore, we introduce a second model. The second model does not solve the interruptible scheduling problem directly; instead, it solves a related uninterruptible problem and transforms the solution back to the original time domain. This model is significantly faster, and can solve instances representative of large-sized mines even when travel times are included.
      Lastly, we focus on finding high-quality schedules by introducing Large Neighborhood Search. To do this, we present a domain-specific neighborhood definition based on relaxing variables corresponding to certain work areas. Variants of this neighborhood are evaluated in Large Neighborhood Search and compared to using only restarts. All methods and models in this thesis are evaluated on instances generated from an operational underground mine.
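The flow-shop abstraction named above can be made concrete on the simplest classical case, far simpler than the thesis's k-stage hybrid variant with interruptions and travel times: for a two-machine flow shop, Johnson's rule gives a makespan-optimal job order, checked here against brute force. The processing times are illustrative.

```python
# A minimal sketch of flow-shop scheduling: Johnson's rule for the classical
# two-machine flow shop, verified against exhaustive permutation search.

from itertools import permutations

def makespan(order, times):
    """Completion time of the last job on machine 2 for a given job order."""
    end1 = end2 = 0
    for j in order:
        end1 += times[j][0]               # machine 1 runs jobs back to back
        end2 = max(end1, end2) + times[j][1]
    return end2

def johnson(times):
    """Jobs with short machine-1 time first (ascending), the rest last
    (descending machine-2 time)."""
    jobs = range(len(times))
    front = sorted((j for j in jobs if times[j][0] <= times[j][1]),
                   key=lambda j: times[j][0])
    back = sorted((j for j in jobs if times[j][0] > times[j][1]),
                  key=lambda j: -times[j][1])
    return front + back

times = [(3, 2), (1, 4), (5, 1), (2, 3), (4, 4)]  # (machine-1, machine-2)
order = johnson(times)
best = min(makespan(list(p), times) for p in permutations(range(len(times))))
```

Closed-form rules like this do not survive the interruptible tasks, machine sharing and travel times of the mining problem, which is why the thesis turns to Constraint Programming and Large Neighborhood Search instead.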